# 4-bit quantization optimization
## Gemma 3 27B It Qat GGUF
- Publisher: lmstudio-community
- Task: Image-to-Text
- Downloads: 41.35k · Likes: 8

The Gemma 3 27B IT model, introduced by Google, supports a 128k-token context window and multimodal image input, making it suitable for a wide range of text generation and image-understanding tasks.
## Flux1 Schnell Quantized
- Publisher: takara-ai
- License: Apache-2.0
- Task: Image Generation
- Downloads: 29 · Likes: 3

Flux.1 Q4_K is a 4-bit quantized GGUF model developed by the Takara.ai research team and optimized for stable-diffusion.cpp, enabling efficient generation of high-quality images on low-end hardware.
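The 4-bit GGUF formats used by models like the Flux.1 quant above store weights in small blocks, each with one shared floating-point scale and one low-bit integer per weight. As a minimal sketch of that general idea (plain absmax block quantization in pure Python, not the exact Q4_K bit layout):

```python
def quantize_4bit(values, block_size=32):
    """Block-wise absmax 4-bit quantization: each block keeps one
    float scale plus one signed 4-bit integer (-7..7) per value."""
    blocks = []
    for i in range(0, len(values), block_size):
        block = values[i:i + block_size]
        scale = max(abs(v) for v in block) / 7 or 1.0  # avoid zero scale
        quants = [round(v / scale) for v in block]     # ints in [-7, 7]
        blocks.append((scale, quants))
    return blocks

def dequantize_4bit(blocks):
    """Reconstruct approximate floats from (scale, quants) blocks."""
    return [q * scale for scale, quants in blocks for q in quants]

weights = [0.02, -0.51, 0.33, 1.25, -0.99, 0.0, 0.75, -1.2]
packed = quantize_4bit(weights, block_size=4)
restored = dequantize_4bit(packed)
# each restored value differs from the original by at most half a
# quantization step (scale / 2) for its block
```

The per-block scale is why 4-bit models stay usable: the rounding error is bounded by the largest magnitude inside each small block, not the whole tensor, while storage drops to roughly 4 bits plus a small per-block overhead per weight.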
## Phi 3 Mini 4k Python
- Publisher: theprint
- License: Apache-2.0
- Tags: Large Language Model · English
- Downloads: 175 · Likes: 1

A Python code-generation model fine-tuned from unsloth/Phi-3-mini-4k-instruct-bnb-4bit, trained with the Unsloth and TRL libraries at a 2x speed improvement.
## Llama 3 NeuralPaca 8b
- Publisher: NeuralNovel
- Tags: Large Language Model · Transformers · English
- Downloads: 21 · Likes: 7

A model based on Meta Llama-3-8B, fine-tuned with memory-efficient optimization techniques and the Hugging Face TRL library, achieving a 2x speed improvement.